
    Developing the Dance Jockey system for musical interaction with the Xsens MVN suit

    In this paper we present the Dance Jockey system, a system developed for using a full-body inertial motion capture suit (Xsens MVN) in music/dance performances. We present different strategies for extracting relevant postures and actions from the continuous data, and show how these postures and actions can be used to control sonic and musical features. The system has been used in several public performances, and we believe it has great potential for further exploration. However, to overcome the current practical and technical challenges of working with the system, it is important to further refine the tools and software in order to facilitate the making of new performance pieces. Proceedings of the 12th International Conference on New Interfaces for Musical Expression. University of Michigan Press, 2012. ISBN 978-0-9855720-1-3.
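
    As a rough illustration of the posture-and-action idea described in this abstract, the sketch below detects one hypothetical posture ("both hands above the head") from streaming joint positions and fires a one-shot trigger when the posture is entered. The joint names, threshold and callback are assumptions for illustration only, not details of the Dance Jockey implementation.

```python
# Hypothetical sketch: fire a one-shot trigger when a "hands above head"
# posture is entered. Thresholds, joint names and the callback are
# illustrative only, not taken from the Dance Jockey system.
from dataclasses import dataclass

@dataclass
class Frame:
    head_y: float
    left_hand_y: float
    right_hand_y: float

class PostureTrigger:
    def __init__(self, on_enter, margin=0.05):
        self.on_enter = on_enter     # callback fired once per posture onset
        self.margin = margin         # metres above the head required
        self.active = False

    def update(self, frame: Frame):
        hands_up = (frame.left_hand_y > frame.head_y + self.margin and
                    frame.right_hand_y > frame.head_y + self.margin)
        if hands_up and not self.active:
            self.on_enter()          # e.g. start a sound, advance a scene
        self.active = hands_up

trigger = PostureTrigger(on_enter=lambda: print("cue: hands raised"))
trigger.update(Frame(head_y=1.7, left_hand_y=1.9, right_hand_y=1.85))
```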

    Methods and Technologies for Using Body Motion for Real-Time Musical Interaction

    There are several strong indications of a profound connection between musical sound and body motion. Musical embodiment, meaning that our bodies play an important role in how we experience and understand music, has become a well-accepted concept in music cognition. Today there is an increasing number of new motion capture (MoCap) technologies that enable us to incorporate the paradigm of musical embodiment into computer music. This thesis focuses on some of the challenges involved in designing such systems: how can we design digital musical instruments that utilize MoCap systems to map motion to sound? The first challenge encountered when wanting to use body motion for musical interaction is to find appropriate MoCap systems. Given the wide availability of different systems, it has been important to investigate the strengths and weaknesses of such technologies. This thesis includes evaluations of two of the technologies available: an optical marker-based system known as OptiTrack V100:R2, and an inertial sensor-based system known as the Xsens MVN suit. Secondly, to make good use of the raw MoCap data from these technologies, it is often necessary to process them in different ways. This thesis presents a review of, and suggestions towards, best practices for processing MoCap data in real time. As a result, several novel methods and filters applicable to processing MoCap data for real-time musical interaction are presented. The most reasonable processing approach was found to be digital filters that are designed and evaluated in the frequency domain. To determine the frequency content of MoCap data, a frequency analysis method has been developed, and an experiment carried out to determine the typical frequency content of free hand motion is also presented. Most notably, it has been necessary to design filters with low time delay, which is an important feature for real-time musical interaction. To be able to design such filters, it was necessary to develop an alternative filter design method. The resulting noise filters and differentiators are closer to the low-delay optimum than those produced by the established filter design methods. Finally, the interdisciplinary challenge of making good couplings between motion and sound has been targeted through the Dance Jockey project. During this project, a system was developed that enables the use of a full-body inertial motion capture suit, the Xsens MVN suit, in music/dance performances. To my knowledge, this is one of the first attempts to use a full-body MoCap suit for musical interaction, and the presented system demonstrates several hands-on solutions for how such data can be used to control sonic and musical features. The system has been used in several public performances, and the conceptual motivation, development details and experience of using the system are presented.
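
    The thesis abstract mentions determining the frequency content of MoCap data in order to choose filter cutoffs. The snippet below is a generic spectral check of that kind, not the specific frequency-analysis method developed in the thesis; the sampling rate and test signal are assumptions.

```python
# Illustrative only: estimate how much of a MoCap signal's energy lies below
# a candidate cutoff frequency (after removing the mean). This is a generic
# spectral check, not the thesis's frequency-analysis method.
import numpy as np

def energy_below(signal, fs, cutoff_hz):
    """Fraction of spectral energy at or below cutoff_hz."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    total = spectrum.sum()
    return spectrum[freqs <= cutoff_hz].sum() / total if total > 0 else 1.0

fs = 100.0                                   # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
hand_x = np.sin(2 * np.pi * 2.0 * t) + 0.05 * np.random.randn(t.size)
print(f"energy below 5 Hz: {energy_below(hand_x, fs, 5.0):.3f}")
```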

    Using IR Optical Marker Based Motion Capture for Exploring Musical Interaction

    The paper presents a conceptual overview of how optical infrared marker-based motion capture systems (IrMoCap) can be used in musical interaction. First we present a review of related work on using IrMoCap for musical control. This is followed by a discussion of possible motion features that can be exploited. Finally, the question of mapping movement features to sound features is presented and discussed.

    A Study of the Noise-Level in Two Infrared Marker-Based Motion Capture Systems

    With musical applications in mind, this paper reports on the level of noise observed in two commercial infrared marker-based motion capture systems: one high-end (Qualisys) and one affordable (OptiTrack). We have tested how various factors (calibration volume, marker size, sampling frequency, etc.) influence the noise level of markers lying still and of markers fixed to subjects standing still. The conclusion is that the motion observed in humans standing still is usually considerably greater than the noise level of the systems. Depending on the system and its calibration, however, the signal-to-noise ratio may in some cases be problematic. Proceedings of the 9th Sound and Music Computing Conference, Copenhagen, Denmark, 11-14 July, 201
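
    A minimal sketch of the kind of measurement the abstract describes: quantifying the "noise level" of a still marker as the spread of its recorded position, and comparing it with the sway of a marker on a person standing still. The data layout and magnitudes are assumed for illustration, not taken from the study.

```python
# Assumed data layout: (n_frames, 3) arrays of x, y, z marker positions in mm.
import numpy as np

def noise_level(positions_mm):
    """Return per-axis standard deviation and overall RMS deviation."""
    centred = positions_mm - positions_mm.mean(axis=0)
    per_axis_std = centred.std(axis=0)                  # noise per axis
    rms = np.sqrt((centred ** 2).sum(axis=1).mean())    # overall RMS deviation
    return per_axis_std, rms

rng = np.random.default_rng(0)
still_marker = rng.normal(0.0, 0.05, size=(1000, 3))    # ~0.05 mm system noise
standing_person = rng.normal(0.0, 2.0, size=(1000, 3))  # ~2 mm body sway
print("marker RMS [mm]:", noise_level(still_marker)[1])
print("person RMS [mm]:", noise_level(standing_person)[1])
```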

    Comparing Inertial and Optical MoCap Technologies for Synthesis Control

    This paper compares the use of two different technologies for controlling sound synthesis in real time: the infrared marker-based motion capture system OptiTrack and the inertial sensor-based motion capture suit Xsens MVN. We present various quantitative comparisons between the data from the two systems, together with results from an experiment in which a musician performed simple musical tasks with each system. Both systems are found to have their strengths and weaknesses, which we present and discuss.

    OSC Implementation and Evaluation of the Xsens MVN suit

    The paper presents research on implementing a full-body inertial motion capture system, the Xsens MVN suit, for musical interaction. Three different approaches for streaming real-time and prerecorded motion capture data with Open Sound Control have been implemented. Furthermore, we present technical performance details and our experience with the motion capture system in realistic practice. Part of Proceedings of the International Conference on New Interfaces for Musical Expression 2011. http://urn.nb.no/URN:NBN:no-2936
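
    For readers unfamiliar with Open Sound Control streaming, the sketch below shows one generic way to send joint positions over OSC using the python-osc package. The address scheme, port and update rate are assumptions for illustration; the paper's own implementation is not reproduced here.

```python
# Generic OSC streaming sketch (python-osc assumed); address pattern and
# port are hypothetical, not the paper's scheme.
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)   # hypothetical receiver

def send_frame(frame_id, joints):
    """joints: dict of joint name -> (x, y, z) position in metres."""
    for name, (x, y, z) in joints.items():
        client.send_message(f"/mocap/{name}/position", [frame_id, x, y, z])

for frame_id in range(120):                   # one second at 120 Hz
    send_frame(frame_id, {"right_hand": (0.3, 1.2, 0.8)})
    time.sleep(1 / 120)
```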

    Proceedings of the International Conference on New Interfaces for Musical Expression

    Editors: Alexander Refsum Jensenius, Anders Tveit, Rolf Inge Godøy, Dan Overholt

    Table of Contents

    - Tellef Kvifte: Keynote Lecture 1: Musical Instrument User Interfaces: the Digital Background of the Analog Revolution - page 1
    - David Rokeby: Keynote Lecture 2: Adventures in Phy-gital Space - page 2
    - Sergi Jordà: Keynote Lecture 3: Digital Lutherie and Multithreaded Musical Performance: Artistic, Scientific and Commercial Perspectives - page 3

    Paper session A — Monday 30 May 11:00–12:30
    - Dan Overholt: The Overtone Fiddle: an Actuated Acoustic Instrument - page 4
    - Colby Leider, Matthew Montag, Stefan Sullivan and Scott Dickey: A Low-Cost, Low-Latency Multi-Touch Table with Haptic Feedback for Musical Applications - page 8
    - Greg Shear and Matthew Wright: The Electromagnetically Sustained Rhodes Piano - page 14
    - Laurel Pardue, Christine Southworth, Andrew Boch, Matt Boch and Alex Rigopulos: Gamelan Elektrika: An Electronic Balinese Gamelan - page 18
    - Jeong-Seob Lee and Woon Seung Yeo: Sonicstrument: A Musical Interface with Stereotypical Acoustic Transducers - page 24

    Poster session B — Monday 30 May 13:30–14:30
    - Scott Smallwood: Solar Sound Arts: Creating Instruments and Devices Powered by Photovoltaic Technologies - page 28
    - Niklas Klügel, Marc René Frieß and Georg Groh: An Approach to Collaborative Music Composition - page 32
    - Nicolas Gold and Roger Dannenberg: A Reference Architecture and Score Representation for Popular Music Human-Computer Music Performance Systems - page 36
    - Mark Bokowiec: V’OCT (Ritual): An Interactive Vocal Work for Bodycoder System and 8 Channel Spatialization - page 40
    - Florent Berthaut, Haruhiro Katayose, Hironori Wakama, Naoyuki Totani and Yuichi Sato: First Person Shooters as Collaborative Multiprocess Instruments - page 44
    - Tilo Hähnel and Axel Berndt: Studying Interdependencies in Music Performance: An Interactive Tool - page 48
    - Sinan Bokesoy and Patrick Adler: 1city 1001vibrations: development of an interactive sound installation with robotic instrument performance - page 52
    - Tim Murray-Browne, Di Mainstone, Nick Bryan-Kinns and Mark D. Plumbley: The medium is the message: Composing instruments and performing mappings - page 56
    - Seunghun Kim, Luke Keunhyung Kim, Songhee Jeong and Woon Seung Yeo: Clothesline as a Metaphor for a Musical Interface - page 60
    - Pietro Polotti and Maurizio Goina: EGGS in action - page 64
    - Berit Janssen: A Reverberation Instrument Based on Perceptual Mapping - page 68
    - Lauren Hayes: Vibrotactile Feedback-Assisted Performance - page 72
    - Daichi Ando: Improving User-Interface of Interactive EC for Composition-Aid by means of Shopping Basket Procedure - page 76
    - Ryan McGee, Yuan-Yi Fan and Reza Ali: BioRhythm: a Biologically-inspired Audio-Visual Installation - page 80
    - Jon Pigott: Vibration, Volts and Sonic Art: A practice and theory of electromechanical sound - page 84
    - George Sioros and Carlos Guedes: Automatic Rhythmic Performance in Max/MSP: the kin.rhythmicator - page 88
    - Andre Goncalves: Towards a Voltage-Controlled Computer — Control and Interaction Beyond an Embedded System - page 92
    - Tae Hun Kim, Satoru Fukayama, Takuya Nishimoto and Shigeki Sagayama: Polyhymnia: An automatic piano performance system with statistical modeling of polyphonic expression and musical symbol interpretation - page 96
    - Juan Pablo Carrascal and Sergi Jorda: Multitouch Interface for Audio Mixing - page 100
    - Nate Derbinsky and Georg Essl: Cognitive Architecture in Mobile Music Interactions - page 104
    - Benjamin D. Smith and Guy E. Garnett: The Self-Supervising Machine - page 108
    - Aaron Albin, Sertan Senturk, Akito Van Troyer, Brian Blosser, Oliver Jan and Gil Weinberg: Beatscape, a mixed virtual-physical environment for musical ensembles - page 112
    - Marco Fabiani, Gaël Dubus and Roberto Bresin: MoodifierLive: Interactive and collaborative expressive music performance on mobile devices - page 116
    - Benjamin Schroeder, Marc Ainger and Richard Parent: A Physically Based Sound Space for Procedural Agents - page 120
    - Francisco Garcia, Leny Vinceslas, Esteban Maestre and Josep Tubau: Acquisition and study of blowing pressure profiles in recorder playing - page 124
    - Anders Friberg and Anna Källblad: Experiences from video-controlled sound installations - page 128
    - Nicolas d’Alessandro, Roberto Calderon and Stefanie Müller: ROOM#81 — Agent-Based Instrument for Experiencing Architectural and Vocal Cues - page 132

    Demo session C — Monday 30 May 13:30–14:30
    - Yasuo Kuhara and Daiki Kobayashi: Kinetic Particles Synthesizer Using Multi-Touch Screen Interface of Mobile Devices - page 136
    - Christopher Carlson, Eli Marschner and Hunter Mccurry: The Sound Flinger: A Haptic Spatializer - page 138
    - Ravi Kondapalli and Benzhen Sung: Daft Datum – an Interface for Producing Music Through Foot-Based Interaction - page 140
    - Charles Martin and Chi-Hsia Lai: Strike on Stage: a percussion and media performance - page 142

    Paper session D — Monday 30 May 14:30–15:30
    - Baptiste Caramiaux, Patrick Susini, Tommaso Bianco, Frédéric Bevilacqua, Olivier Houix, Norbert Schnell and Nicolas Misdariis: Gestural Embodiment of Environmental Sounds: an Experimental Study - page 144
    - Sebastian Mealla, Aleksander Valjamae, Mathieu Bosi and Sergi Jorda: Listening to Your Brain: Implicit Interaction in Collaborative Music Performances - page 149
    - Dan Newton and Mark Marshall: Examining How Musicians Create Augmented Musical Instruments - page 155

    Paper session E — Monday 30 May 16:00–17:00
    - Zachary Seldess and Toshiro Yamada: Tahakum: A Multi-Purpose Audio Control Framework - page 161
    - Dawen Liang, Guangyu Xia and Roger Dannenberg: A Framework for Coordination and Synchronization of Media - page 167
    - Edgar Berdahl and Wendy Ju: Satellite CCRMA: A Musical Interaction and Sound Synthesis Platform - page 173

    Paper session F — Tuesday 31 May 09:00–10:50
    - Nicholas J. Bryan and Ge Wang: Two Turntables and a Mobile Phone - page 179
    - Nick Kruge and Ge Wang: MadPad: A Crowdsourcing System for Audiovisual Sampling - page 185
    - Patrick O’Keefe and Georg Essl: The Visual in Mobile Music Performance - page 191
    - Ge Wang, Jieun Oh and Tom Lieber: Designing for the iPad: Magic Fiddle - page 197
    - Benjamin Knapp and Brennon Bortz: MobileMuse: Integral Music Control Goes Mobile - page 203
    - Stephen Beck, Chris Branton, Sharath Maddineni, Brygg Ullmer and Shantenu Jha: Tangible Performance Management of Grid-based Laptop Orchestras - page 207

    Poster session G — Tuesday 31 May 13:30–14:30
    - Smilen Dimitrov and Stefania Serafin: Audio Arduino — an ALSA (Advanced Linux Sound Architecture) audio driver for FTDI-based Arduinos - page 211
    - Seunghun Kim and Woon Seung Yeo: Musical control of a pipe based on acoustic resonance - page 217
    - Anne-Marie Hansen, Hans Jørgen Andersen and Pirkko Raudaskoski: Play Fluency in Music Improvisation Games for Novices - page 220
    - Izzi Ramkissoon: The Bass Sleeve: A Real-time Multimedia Gestural Controller for Augmented Electric Bass Performance - page 224
    - Ajay Kapur, Michael Darling, James Murphy, Jordan Hochenbaum, Dimitri Diakopoulos and Trimpin: The KarmetiK NotomotoN: A New Breed of Musical Robot for Teaching and Performance - page 228
    - Adrian Barenca Aliaga and Giuseppe Torre: The Manipuller: Strings Manipulation and Multi-Dimensional Force Sensing - page 232
    - Alain Crevoisier and Cécile Picard-Limpens: Mapping Objects with the Surface Editor - page 236
    - Jordan Hochenbaum and Ajay Kapur: Adding Z-Depth and Pressure Expressivity to Tangible Tabletop Surfaces - page 240
    - Andrew Milne, Anna Xambó, Robin Laney, David B. Sharp, Anthony Prechtl and Simon Holland: Hex Player — A Virtual Musical Controller - page 244
    - Carl Haakon Waadeland: Rhythm Performance from a Spectral Point of View - page 248
    - Josep M Comajuncosas, Enric Guaus, Alex Barrachina and John O’Connell: Nuvolet: 3D gesture-driven collaborative audio mosaicing - page 252
    - Erwin Schoonderwaldt and Alexander Refsum Jensenius: Effective and expressive movements in a French-Canadian fiddler’s performance - page 256
    - Daniel Bisig, Jan Schacher and Martin Neukom: Flowspace – A Hybrid Ecosystem - page 260
    - Marc Sosnick and William Hsu: Implementing a Finite Difference-Based Real-time Sound Synthesizer using GPUs - page 264
    - Axel Tidemann: An Artificial Intelligence Architecture for Musical Expressiveness that Learns by Imitation - page 268
    - Luke Dahl, Jorge Herrera and Carr Wilkerson: TweetDreams: Making music with the audience and the world using real-time Twitter data - page 272
    - Lawrence Fyfe, Adam Tindale and Sheelagh Carpendale: JunctionBox: A Toolkit for Creating Multi-touch Sound Control Interfaces - page 276
    - Andrew Johnston: Beyond Evaluation: Linking Practice and Theory in New Musical Interface Design - page 280
    - Phillip Popp and Matthew Wright: Intuitive Real-Time Control of Spectral Model Synthesis - page 284
    - Pablo Molina, Martin Haro and Sergi Jordà: BeatJockey: A new tool for enhancing DJ skills - page 288
    - Jan Schacher and Angela Stoecklin: Traces – Body, Motion and Sound - page 292
    - Grace Leslie and Tim Mullen: MoodMixer: EEG-based Collaborative Sonification - page 296
    - Ståle A. Skogstad, Kristian Nymoen, Yago de Quay and Alexander Refsum Jensenius: OSC Implementation and Evaluation of the Xsens MVN suit - page 300
    - Lonce Wyse, Norikazu Mitani and Suranga Nanayakkara: The effect of visualizing audio targets in a musical listening and performance task - page 304
    - Adrian Freed, John MacCallum and Andrew Schmeder: Composability for Musical Gesture Signal Processing using new OSC-based Object and Functional Programming Extensions to Max/MSP - page 308
    - Kristian Nymoen, Ståle A. Skogstad and Alexander Refsum Jensenius: SoundSaber — A Motion Capture Instrument - page 312
    - Øyvind Brandtsegg, Sigurd Saue and Thom Johansen: A modulation matrix for complex parameter sets - page 316

    Demo session H — Tuesday 31 May 13:30–14:30
    - Yu-Chung Tseng, Che-Wei Liu, Tzu-Heng Chi and Hui-Yu Wang: Sound Low Fun - page 320
    - Edgar Berdahl and Chris Chafe: Autonomous New Media Artefacts (AutoNMA) - page 322
    - Min-Joon Yoo, Jin-Wook Beak and In-Kwon Lee: Creating Musical Expression using Kinect - page 324
    - Staas de Jong: Making grains tangible: microtouch for microsound - page 326
    - Baptiste Caramiaux, Frederic Bevilacqua and Norbert Schnell: Sound Selection by Gestures - page 329

    Paper session I — Tuesday 31 May 14:30–15:30
    - Hernán Kerlleñevich, Manuel Eguia and Pablo Riera: An Open Source Interface based on Biological Neural Networks for Interactive Music Performance - page 331
    - Nicholas Gillian, R. Benjamin Knapp and Sile O’Modhrain: Recognition Of Multivariate Temporal Musical Gestures Using N-Dimensional Dynamic Time Warping - page 337
    - Nicholas Gillian, R. Benjamin Knapp and Sile O’Modhrain: A Machine Learning Toolbox For Musician Computer Interaction - page 343

    Paper session J — Tuesday 31 May 16:00–17:00
    - Elena Jessop, Peter Torpey and Benjamin Bloomberg: Music and Technology in Death and the Powers - page 349
    - Victor Zappi, Dario Mazzanti, Andrea Brogni and Darwin Caldwell: Design and Evaluation of a Hybrid Reality Performance - page 355
    - Jérémie Garcia, Theophanis Tsandilas, Carlos Agon and Wendy Mackay: InkSplorer: Exploring Musical Ideas on Paper and Computer - page 361

    Paper session K — Wednesday 1 June 09:00–10:30
    - Pedro Lopes, Alfredo Ferreira and Joao Madeiras Pereira: Battle of the DJs: an HCI perspective of Traditional, Virtual, Hybrid and Multitouch DJing - page 367
    - Adnan Marquez-Borbon, Michael Gurevich, A. Cavan Fyans and Paul Stapleton: Designing Digital Musical Interactions in Experimental Contexts - page 373
    - Jonathan Reus: Crackle: A mobile multitouch topology for exploratory sound interaction - page 377
    - Samuel Aaron, Alan F. Blackwell, Richard Hoadley and Tim Regan: A principled approach to developing new languages for live coding - page 381
    - Jamie Bullock, Daniel Beattie and Jerome Turner: Integra Live: a new graphical user interface for live electronic music - page 387

    Paper session L — Wednesday 1 June 11:00–12:30
    - Jung-Sim Roh, Yotam Mann, Adrian Freed and David Wessel: Robust and Reliable Fabric, Piezoresistive Multitouch Sensing Surfaces for Musical Controllers - page 393
    - Mark Marshall and Marcelo Wanderley: Examining the Effects of Embedded Vibrotactile Feedback on the Feel of a Digital Musical Instrument - page 399
    - Dimitri Diakopoulos and Ajay Kapur: HIDUINO: A firmware for building driverless USB-MIDI devices using the Arduino microcontroller - page 405
    - Emmanuel Flety and Côme Maestracci: Latency improvement in sensor wireless transmission using IEEE 802.15.4 - page 409
    - Jeff Snyder: The Snyderphonics Manta, a Novel USB Touch Controller - page 413

    Poster session M — Wednesday 1 June 13:30–14:30
    - William Hsu: On Movement, Structure and Abstraction in Generative Audiovisual Improvisation - page 417
    - Claudia Robles Angel: Creating Interactive Multimedia Works with Bio-data - page 421
    - Paula Ustarroz: TresnaNet: musical generation based on network protocols - page 425
    - Matti Luhtala, Tiina Kymäläinen and Johan Plomp: Designing a Music Performance Space for Persons with Intellectual Learning Disabilities - page 429
    - Tom Ahola, Teemu Ahmaniemi, Koray Tahiroglu, Fabio Belloni and Ville Ranki: Raja — A Multidisciplinary Artistic Performance - page 433
    - Emmanuelle Gallin and Marc Sirguy: Eobody3: A ready-to-use pre-mapped & multi-protocol sensor interface - page 437
    - Rasmus Bååth, Thomas Strandberg and Christian Balkenius: Eye Tapping: How to Beat Out an Accurate Rhythm using Eye Movements - page 441
    - Eric Rosenbaum: MelodyMorph: A Reconfigurable Musical Instrument - page 445
    - Karmen Franinovic: Flo)(ps: Between Habitual and Explorative Action-Sound Relationships - page 448
    - Margaret Schedel, Rebecca Fiebrink and Phoenix Perry: Wekinating 000000Swan: Using Machine Learning to Create and Control Complex Artistic Systems - page 453
    - Carles F. Julià, Daniel Gallardo and Sergi Jordà: MTCF: A framework for designing and coding musical tabletop applications directly in Pure Data - page 457
    - David Pirrò and Gerhard Eckel: Physical modelling enabling enaction: an example - page 461
    - Thomas Mitchell and Imogen Heap: SoundGrasp: A Gestural Interface for the Performance of Live Music - page 465
    - Tim Mullen, Richard Warp and Adam Jansch: Minding the (Transatlantic) Gap: An Internet-Enabled Acoustic Brain-Computer Music Interface - page 469
    - Stefano Papetti, Marco Civolani and Federico Fontana: Rhythm’n’Shoes: a wearable foot tapping interface with audio-tactile feedback - page 473
    - Cumhur Erkut, Antti Jylhä and Reha Dişçioğlu: A structured design and evaluation model with application to rhythmic interaction displays - page 477
    - Marco Marchini, Panos Papiotis, Alfonso Perez and Esteban Maestre: A Hair Ribbon Deflection Model for Low-Intrusiveness Measurement of Bow Force in Violin Performance - page 481
    - Jonathan Forsyth, Aron Glennon and Juan Bello: Random Access Remixing on the iPad - page 487
    - Erika Donald, Ben Duinker and Eliot Britton: Designing the EP trio: Instrument identities, control and performance practice in an electronic chamber music ensemble - page 491
    - Cavan Fyans and Michael Gurevich: Perceptions of Skill in Performances with Acoustic and Electronic Instruments - page 495
    - Hiroki Nishino: Cognitive Issues in Computer Music Programming - page 499
    - Roland Lamb and Andrew Robertson: Seaboard: a new piano keyboard-related interface combining discrete and continuous control - page 503
    - Gilbert Beyer and Max Meier: Music Interfaces for Novice Users: Composing Music on a Public Display with Hand Gestures - page 507
    - Birgitta Cappelen and Anders-Petter Andersson: Expanding the role of the instrument - page 511
    - Todor Todoroff: Wireless Digital/Analog Sensors for Music and Dance Performances - page 515
    - Trond Engum: Real-time control and creative convolution — exchanging techniques between distinct genres - page 519
    - Andreas Bergsland: The Six Fantasies Machine – an instrument modelling phrases from Paul Lansky’s Six Fantasies - page 523

    Demo session N — Wednesday 1 June 13:30–14:30
    - Jan Trützschler von Falkenstein: Gliss: An Intuitive Sequencer for the iPhone and iPad - page 527
    - Jiffer Harriman, Locky Casey, Linden Melvin and Mike Repper: Quadrofeelia — A New Instrument for Sliding into Notes - page 529
    - Johnty Wang, Nicolas D’Alessandro, Sidney Fels and Bob Pritchard: SQUEEZY: Extending a Multi-touch Screen with Force Sensing Objects for Controlling Articulatory Synthesis - page 531
    - Souhwan Choe and Kyogu Lee: SWAF: Towards a Web Application Framework for Composition and Documentation of Soundscape - page 533
    - Norbert Schnell, Frederic Bevilacqua, Nicolas Rasamimana, Julien Blois, Fabrice Guedy and Emmanuel Flety: Playing the "MO" — Gestural Control and Re-Embodiment of Recorded Sound and Music - page 535
    - Bruno Zamborlin, Marco Liuni and Giorgio Partesana: (LAND)MOVES - page 537
    - Bill Verplank and Francesco Georg: Can Haptics make New Music? — Fader and Plank Demos - page 53

    Dance Jockey: Performing Electronic Music by Dancing

    The authors present an experimental musical performance called Dance Jockey, wherein sounds are controlled by sensors on the dancer's body. These sensors manipulate music in real time by acquiring data about body actions and transmitting the information to a control unit, which makes decisions and gives instructions to the audio software. The system triggers a broad range of musical events and maps them to sound effects and musical parameters such as pitch, loudness and rhythm. Copyright 2011 ISAS.
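
    To make the "control unit that makes decisions" idea concrete, here is a hypothetical sketch of a cue list that steps through sections of a performance when named body actions are detected and emits instructions for the audio software. The action names, cues and message format are invented for illustration and are not taken from the Dance Jockey system.

```python
# Hypothetical control-unit sketch: advance through performance cues when
# expected actions are detected, sending instructions to audio software.
CUES = [
    ("stomp",   {"scene": "intro",  "play": "drone",  "volume": 0.4}),
    ("arms_up", {"scene": "build",  "play": "rhythm", "volume": 0.7}),
    ("spin",    {"scene": "climax", "play": "lead",   "volume": 1.0}),
]

class ControlUnit:
    def __init__(self, send):
        self.send = send      # callback delivering instructions to the audio software
        self.index = 0

    def on_action(self, action):
        if self.index < len(CUES) and action == CUES[self.index][0]:
            self.send(CUES[self.index][1])
            self.index += 1

unit = ControlUnit(send=print)
for detected in ["wave", "stomp", "arms_up"]:   # simulated action stream
    unit.on_action(detected)
```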

    Filtering Motion Capture Data for Real-Time Applications

    In this paper we present some custom-designed filters for real-time motion capture applications. Our target application is motion controllers, i.e. systems that interpret hand motion for musical interaction. In earlier research we found effective methods for designing nearly optimal filters for real-time applications. However, to be able to design suitable filters for our target application, it is necessary to establish the typical frequency content of the motion capture data we want to filter. This in turn allows us to determine a reasonable cutoff frequency for the filters. We have therefore conducted an experiment in which we recorded the hand motion of 20 subjects. The frequency spectra of these data, together with a method similar to residual analysis, were then used to determine reasonable cutoff frequencies. Based on this experiment, we propose three cutoff frequencies for different scenarios and filtering needs: 5, 10 and 15 Hz, corresponding to heavy, medium and light filtering, respectively. Finally, we propose a range of real-time filters applicable to motion controllers, in particular low-pass filters and low-pass differentiators of degrees one and two, which in our experience are the most useful filters for our target application. Proceedings of the 13th International Conference on New Interfaces for Musical Expression, NIME'13, May 27-30, 2013, KAIST, Daejeon, Korea. Copyright remains with the author(s).
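
    The sketch below shows how one of the proposed cutoff frequencies (5, 10 or 15 Hz) might be applied to streaming MoCap data. It uses a standard causal Butterworth filter and a simple numerical derivative purely as stand-ins; the paper's own low-delay filters and differentiators are custom designs not reproduced here, and the sampling rate is assumed.

```python
# Illustrative only: generic causal low-pass filtering at a chosen cutoff,
# not the paper's custom low-delay filter designs.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 100.0                         # assumed MoCap sampling rate in Hz
cutoff = 10.0                      # "medium" filtering
sos = butter(2, cutoff, btype="low", fs=fs, output="sos")

t = np.arange(0, 2, 1 / fs)
hand_pos = np.sin(2 * np.pi * 1.5 * t) + 0.02 * np.random.randn(t.size)
smoothed = sosfilt(sos, hand_pos)              # causal, usable in real time
velocity = np.gradient(smoothed, 1 / fs)       # simple differentiator stand-in
print(smoothed[:5], velocity[:5])
```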

    Comparing Motion Data from an iPod Touch to a High-End Optical Infrared Marker-Based Motion Capture System

    The paper presents an analysis of the quality of motion data from an iPod Touch (4th gen.). Acceleration and orientation data derived from the internal sensors of the iPod are compared to data from a high-end optical infrared marker-based motion capture system (Qualisys) in terms of latency, jitter, accuracy and precision. We identify some rotational drift in the iPod, and some time lag between the two systems. Still, the iPod motion data are quite reliable, especially for describing relative motion over a short period of time. Proceedings of the 12th International Conference on New Interfaces for Musical Expression. University of Michigan Press 2012. ISBN 978-0-9855720-1-3. pp. 88-9
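
    As a generic illustration of how a time lag between two sensor streams can be estimated, the sketch below cross-correlates two acceleration signals sampled at a common rate. The signal names, sampling rate and simulated delay are assumptions; this is not the analysis pipeline used in the paper.

```python
# A minimal sketch: estimate the lag of one acceleration stream relative to
# another by cross-correlation. Data and rates are simulated for illustration.
import numpy as np

def estimate_lag(a, b, fs):
    """Return the lag of b relative to a, in seconds (positive = b is late)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    corr = np.correlate(b, a, mode="full")
    lag_samples = np.argmax(corr) - (len(a) - 1)
    return lag_samples / fs

fs = 100.0
t = np.arange(0, 5, 1 / fs)
qualisys_acc = np.sin(2 * np.pi * 1.0 * t)
ipod_acc = np.roll(qualisys_acc, 8) + 0.05 * np.random.randn(t.size)  # ~80 ms late
print(f"estimated lag: {estimate_lag(qualisys_acc, ipod_acc, fs) * 1000:.0f} ms")
```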